AI Training Challenges Intensify as Models Outpace Human Expertise
Anthropic and Microsoft-backed OpenAI are pushing the boundaries of AI training with increasingly sophisticated methods. The companies employ synthetic environments—dubbed 'gyms'—alongside domain experts to refine models like GPT-5. Yet human trainers report diminishing returns: tasks that once stumped AI now require PhD-level complexity to challenge the systems.
Specialists note a stark decline in identifiable gaps. A linguist who previously flagged three shortcomings per week in OpenAI's o3 model now finds just one or two in GPT-5. While STEM fields still yield some training opportunities, the models are mastering even advanced chemistry queries that require cross-referencing molecular analyses and restructuring computations.